Section: New Results

Computer Graphics

Content-Based Color Transfer

Participants : Fuzhang Wu, Weiming Dong, Yan Kong, Xing Mei, Jean-Claude Paul, Xiaopeng Zhang.

This paper presents a novel content-based method for transferring colour patterns between images. Unlike previous methods that rely on image colour statistics, our method puts the emphasis on high-level scene content analysis. We first automatically extract the foreground subject areas and the background scene layout from each image, and establish semantic correspondences between the regions of the source and target images. In the second step, the source image is re-coloured in a novel optimization framework that incorporates the extracted content information and the spatial distributions of the target colour styles. A new progressive transfer scheme integrates the advantages of both global and local transfer algorithms while avoiding over-segmentation artefacts in the result. Experiments show that, with a better understanding of the scene contents, our method preserves the spatial layout, the colour distribution and the visual coherence throughout the transfer. As an interesting extension, our method can also be used to re-colour video clips with spatially-varied colour effects. [26]
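
The full method couples this content analysis with the optimization-based progressive transfer described above. As a rough illustration of the region-wise idea only, the following Python sketch applies a simple Reinhard-style mean/variance colour matching independently to each pair of semantically matched regions; the Lab-space images and the region masks are assumed to be computed elsewhere, and this is not the optimization framework of [26].

    import numpy as np

    def region_color_transfer(src_lab, tgt_lab, region_pairs):
        """Simplified region-wise colour transfer by statistics matching.

        src_lab, tgt_lab : float arrays of shape (H, W, 3) in Lab colour space.
        region_pairs     : list of (src_mask, tgt_mask) boolean arrays encoding
                           the semantic correspondences between regions.
        Returns a re-coloured copy of src_lab.
        """
        out = src_lab.copy()
        for src_mask, tgt_mask in region_pairs:
            src_pix = src_lab[src_mask]            # pixels of the source region
            tgt_pix = tgt_lab[tgt_mask]            # pixels of the matched target region
            mu_s, std_s = src_pix.mean(0), src_pix.std(0) + 1e-6
            mu_t, std_t = tgt_pix.mean(0), tgt_pix.std(0)
            # shift and scale each channel so the source statistics match the target's
            out[src_mask] = (src_pix - mu_s) / std_s * std_t + mu_t
        return out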

Large-scale forest rendering: Real-time, realistic, and progressive

Participants : Xiaopeng Zhang, Weiming Dong.

Real-time rendering of large-scale forest landscape scenes is important in many applications, such as video games, Internet graphics, and landscape and cityscape scene design and visualization. One challenge in virtual reality is transferring a large-scale forest environment containing plant models with rich geometric detail through the network and rendering it in real time. We present a new framework for rendering large-scale forest scenes realistically and quickly that integrates the extraction of level-of-detail (LOD) tree models, real-time shadow rendering for large-scale forests, and forest data transmission for network applications. We construct a series of LOD tree models to compress the overall complexity of the forest during view-dependent forest navigation. A new leaf phyllotaxy LOD modeling method matches leaf models with textures, balancing visual effect and model complexity. To render the scene progressively from coarse to fine, sequences of LOD models are transferred from simple to complex; the forest can be rendered as soon as a simple model of each tree is available, allowing users to quickly see a sketch of the scene. To improve client performance, we also adopt an LOD strategy for shadow maps. Smoothing filters are implemented entirely on the graphics processing unit (GPU) to reduce the shadows' aliasing artifacts, which creates a soft shadowing effect. We also present a hardware instancing method to render more levels of LOD models, which overcomes the limitation that current GPUs can emit primitives into only a limited number of separate vertex streams. Experiments show that large-scale forest scenes can be rendered in real time with smooth shadows. [14]
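
As a minimal sketch of the view-dependent LOD selection and coarse-to-fine transmission, the fragment below picks a discrete LOD per tree from its distance to the camera and orders network requests so that the simplest model of every tree arrives first. The level count and distance thresholds are illustrative assumptions, not values from [14], and the leaf phyllotaxy modeling, shadow-map LOD and GPU instancing are not covered.

    import math

    def select_lod(tree_pos, camera_pos, lod_count, near=20.0, far=800.0):
        """Pick an LOD index for one tree from its camera distance.

        Index 0 is the most detailed model, lod_count - 1 the simplest; the
        near/far thresholds are illustrative, not taken from the paper.
        """
        d = math.dist(tree_pos, camera_pos)
        t = min(max((d - near) / (far - near), 0.0), 1.0)   # normalise to [0, 1]
        return min(int(t * lod_count), lod_count - 1)

    def progressive_stream_order(trees, camera_pos, lod_count=4):
        """Order (tree_id, lod) requests so coarse models of every tree arrive
        first and finer models follow only where the viewpoint needs them."""
        order = []
        for lod in reversed(range(lod_count)):               # simplest level first
            for tree_id, pos in trees:
                if select_lod(pos, camera_pos, lod_count) <= lod:
                    order.append((tree_id, lod))
        return order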

Fast Multi-Operator Image Resizing and Evaluation

Participants : Weiming Dong, Xiaopeng Zhang, Jean-Claude Paul.

Current multi-operator image resizing methods generate impressive results by using an image similarity measure to guide the resizing process: an optimal operation path is found in the resizing space. However, their slow resizing speed, caused by the inefficient computation strategy of bidirectional patch matching, is a drawback in practical use. In this paper, we present a novel method to address this problem. By combining seam carving with scaling and cropping, our method achieves very fast content-aware image resizing. We define cost functions combining image energy and a dominant color descriptor for all the operators, evaluating the damage to both local image content and global visual effect. Our algorithm can therefore automatically find an optimal sequence of operations to resize the image using dynamic programming or a greedy algorithm. We also extend our algorithm to indirect image resizing, which can preserve the aspect ratio of the dominant object in an image. [16]
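
The operator-sequence search can be pictured as a small dynamic programme over a "resizing space" whose states record how many steps of each operator have been applied. The sketch below is only a schematic of that search: the per-step cost is left abstract (in [16] it combines image energy with a dominant color descriptor and is evaluated on the actual intermediate images), and the operator names are placeholders.

    def optimal_operator_path(total_steps, operators, step_cost):
        """Toy dynamic programme over the resizing space.

        A state is a tuple counting how many steps of each operator have been
        applied; step_cost(state, op) estimates the damage of one more step of
        `op` from that state.  Returns (best_cost, operator_sequence) for
        shrinking the image by total_steps unit operations.
        """
        best = {(0,) * len(operators): (0.0, [])}            # state -> (cost, path)
        for _ in range(total_steps):
            frontier = {}
            for state, (cost, path) in best.items():
                for i, op in enumerate(operators):
                    nxt = tuple(c + (1 if j == i else 0) for j, c in enumerate(state))
                    new_cost = cost + step_cost(state, op)
                    if nxt not in frontier or new_cost < frontier[nxt][0]:
                        frontier[nxt] = (new_cost, path + [op])
            best = frontier
        return min(best.values(), key=lambda v: v[0])

For instance, optimal_operator_path(10, ["seam_carving", "scaling", "cropping"], my_cost) would return the cheapest 10-step sequence under a user-supplied cost function my_cost.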

Easy modeling of realistic trees from freehand sketches

Participant : Xiaopeng Zhang.

Creating realistic 3D tree models in a convenient way is a challenge in game design and movie making, owing to the diversity and occlusion of tree structures. Current sketch-based and image-based approaches to fast tree modeling are limited in visual quality and speed, and they generally require complex parameter adjustment, which is difficult for novices. In this paper, we present a simple method for quickly generating varied 3D tree models from freehand sketches without parameter adjustment. On two input images, the user draws strokes representing the main branches and crown silhouettes of a tree, and the system automatically produces a 3D tree at high speed. First, two 2D skeletons are built from the strokes, and a 3D tree structure resembling the input sketches is built by branch retrieval from the 2D skeletons. Small branches are then generated within the sketched 2D crown silhouettes based on self-similarity and angle restriction. The system is demonstrated on a variety of examples. It maintains the main features of a tree, namely the main branch structure and the crown shape, and can be used as a convenient tool for tree simulation and design. [21]
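
The fragment below loosely illustrates, in 2D, the self-similar, angle-restricted growth of small branches; the stroke-based skeleton extraction, the branch retrieval and the test keeping branches inside the sketched crown silhouette are omitted, and every parameter value is an assumption.

    import math, random

    def grow_branches(base, direction, length, depth,
                      max_angle=math.radians(35), shrink=0.7, children=3):
        """Very simplified self-similar branch generation with an angle restriction.

        From a parent segment (base point, unit direction, length), each recursion
        level spawns `children` shorter branches whose directions deviate from the
        parent by at most max_angle.  Returns a list of (start, end) 2D segments.
        """
        if depth == 0:
            return []
        end = (base[0] + direction[0] * length, base[1] + direction[1] * length)
        segments = [(base, end)]
        parent_angle = math.atan2(direction[1], direction[0])
        for _ in range(children):
            a = parent_angle + random.uniform(-max_angle, max_angle)
            child_dir = (math.cos(a), math.sin(a))
            segments += grow_branches(end, child_dir, length * shrink, depth - 1,
                                      max_angle, shrink, children)
        return segments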

Real-time ink simulation using a grid-particle method

Participants : Shibiao Xu, Xing Mei, Weiming Dong, Xiaopeng Zhang.

This paper presents an effective method to simulate the ink diffusion process in real time that yields realistic visual effects. Our algorithm updates the dynamic ink volume using a hybrid grid-particle method: the fluid velocity field is calculated with a low-resolution grid structure, whereas the highly detailed ink effects are controlled and visualized with the particles. To facilitate user interaction and extend this method, we propose a particle-guided method that allows artists to design the overall states using the coarse-resolution particles and to preview the motion quickly. To treat coupling with solids and other fluids, we update the grid-particle representation with no-penetration boundary conditions and implicit interaction conditions. To treat moving "ink-emitting" objects, we introduce an extra drag-force model to enhance the particle motion effects; this force might not be physically accurate, but it proves effective for producing animations. We also propose an improved ink rendering method that uses particle sprites and motion blurring techniques. The simulation and the rendering processes are efficiently implemented on graphics hardware at interactive frame rates. Compared to traditional fluid simulation methods that treat water and ink as two mixable fluids, our method is simple but effective: it captures various ink effects, such as pinned boundaries and filament patterns, while still running in real time, it allows easy control of the animation, it includes basic solid-fluid interactions, and it can address multiple ink sources without complex interface tracking. Our method is attractive for animation production and art design.
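
A much-reduced sketch of the particle side of this grid-particle update is given below: particles read a velocity from the coarse grid (nearest cell here, where a real solver would interpolate) and nearby particles receive an extra drag term aligned with a moving ink-emitting object's velocity. The falloff shape and coefficients are illustrative assumptions; the grid-based fluid solve, the boundary conditions and the sprite-based rendering are not shown.

    import numpy as np

    def advect_particles(positions, grid_vel, dt,
                         obj_pos=None, obj_vel=None,
                         drag_radius=5.0, drag_coeff=0.5):
        """Simplified grid-particle step.

        positions : (N, 2) float array of particle positions in grid coordinates.
        grid_vel  : (H, W, 2) velocity field from the coarse fluid solve.
        obj_pos, obj_vel : position and velocity of a moving ink-emitting object.
        """
        idx = np.clip(positions.astype(int), 0, np.array(grid_vel.shape[:2]) - 1)
        vel = grid_vel[idx[:, 0], idx[:, 1]]               # sample the coarse grid
        if obj_pos is not None:
            d = np.linalg.norm(positions - obj_pos, axis=1, keepdims=True)
            w = np.exp(-(d / drag_radius) ** 2)            # falls off with distance
            vel = vel + drag_coeff * w * np.asarray(obj_vel)
        return positions + dt * vel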

Image zooming using directional cubic convolution interpolation

Participant : Weiming Dong.

Image zooming is a technique for producing a high-resolution image from its low-resolution counterpart; it is also called image interpolation because it is usually implemented by interpolation. Keys' cubic convolution (CC) interpolation has become a standard in the image interpolation field, but CC interpolates the missing pixels indiscriminately along the horizontal or vertical direction and typically incurs blurring, blocking, ringing or other artefacts. In this paper, we propose a novel edge-directed CC interpolation scheme that adapts to the varying edge structures of images. We also give a method for estimating the strong edge at a missing pixel location, which guides the interpolation for that pixel. Our method preserves the sharp edges and details of images while notably suppressing the artefacts that usually occur with CC interpolation. The experimental results demonstrate that our method significantly outperforms CC interpolation in terms of both subjective and objective measures. [30]
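
To make the idea concrete, the sketch below shows the standard Keys cubic convolution weights (a = -0.5) and a toy edge-directed choice between the two diagonal interpolation directions for a single missing pixel. The variation measure, the threshold and the blending rule are illustrative stand-ins for the strong-edge estimation of [30].

    import numpy as np

    def keys_cubic_weights(t, a=-0.5):
        """Keys cubic convolution weights for four equally spaced samples around
        an offset t in [0, 1]; a = -0.5 is the standard Keys parameter."""
        x = np.array([1 + t, t, 1 - t, 2 - t])              # distances to the samples
        return np.where(x <= 1,
                        (a + 2) * x**3 - (a + 3) * x**2 + 1,
                        a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a)

    def directional_cc_pixel(p45, p135, a=-0.5, threshold=1.15):
        """Toy edge-directed cubic convolution for one missing pixel.

        p45, p135 : the four known neighbours along the 45-degree and 135-degree
                    diagonals through the missing pixel, ordered along each line.
        The direction with less variation is treated as running along the edge
        and is the one interpolated.
        """
        w = keys_cubic_weights(0.5, a)                      # missing pixel sits halfway
        v45 = abs(p45[0] - p45[2]) + abs(p45[1] - p45[3])
        v135 = abs(p135[0] - p135[2]) + abs(p135[1] - p135[3])
        if v45 > threshold * v135:                          # edge runs along 135 degrees
            return float(np.dot(w, p135))
        if v135 > threshold * v45:                          # edge runs along 45 degrees
            return float(np.dot(w, p45))
        return float(0.5 * (np.dot(w, p45) + np.dot(w, p135)))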